Autonomous Learning Evaluation toward Active Motor Babbling
Abstract
Learning in robotics is one of the practical solutions that allows an autonomous robot to perceive its body and the environment. As discussed in the context of the frame problem [1], the robot's body and the environment are too complex to be modeled exhaustively. Even if the kinematics and dynamics of the body are known, the real sensory input to the body differs from the one derived from a theoretical model, because sensory input is always influenced by the interaction with the environment. For instance, when we grasp an object, the physical state of our arm, such as its weight and momentum, differs from that in the normal state. However, it is difficult to evaluate all potential variations in advance, since real data can vary considerably and the behavior of the external environment is not necessarily controlled by the robot: in this example, the state of the arm always differs depending on the grasped object. Learning, on the other hand, provides a data-driven solution: the robot explores the environment and extracts knowledge to build an internal model of the body and the environment. Learning-based motor control systems are well studied in the literature [2][3][4][5][6][7]. Haruno et al. proposed a modular control approach [3], which couples a forward model (state predictor) with an inverse model (controller). The forward model predicts the next state from the current state and a motor command (an efference copy), while the inverse model generates a motor command from the current state and the predicted state. The desired motor command is not available, but the feedback-error-learning procedure (FEL) provides a suitable approximation [4]. The prediction error serves both to gate the learning of the forward and inverse models and to weight the outputs of the inverse models in the final motor command. Motor prediction based on a copy of the motor command compensates for the delays and noise in the sensorimotor system.
Moreover, motor prediction allows differentiating self-generated movements from externally imposed forces and disturbances [5][6]. Learning-based perception is applicable not only to motor control but also to modeling the environment, owing to multiple sensory modalities such as vision, audition, touch, force/torque, and acceleration sensing. In a similar approach, we developed a learning system aimed at predicting future sensing data based...
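The modular coupling described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (not the authors' implementation, and with toy linear dynamics assumed for each module): each module pairs a forward model with an inverse model, the forward-model prediction error is turned into a soft responsibility weight, and that weight both gates which module should learn and blends the inverse models' motor commands into the final output.

```python
import math

class Module:
    """One forward/inverse pair. Toy assumption: next_state = state + gain * command."""

    def __init__(self, gain):
        self.gain = gain

    def forward(self, state, command):
        # Forward model: predict the next state from the current state
        # and an efference copy of the motor command.
        return state + self.gain * command

    def inverse(self, state, desired_state):
        # Inverse model: the motor command that would reach desired_state.
        return (desired_state - state) / self.gain


def responsibilities(modules, state, command, observed_next, sigma=1.0):
    """Soft-max of negative squared prediction error: the gating signal."""
    likelihoods = []
    for m in modules:
        err = observed_next - m.forward(state, command)
        likelihoods.append(math.exp(-err * err / (2 * sigma ** 2)))
    total = sum(likelihoods)
    return [l / total for l in likelihoods]


def blended_command(modules, weights, state, desired_state):
    """Final motor command: responsibility-weighted sum of inverse outputs."""
    return sum(w * m.inverse(state, desired_state)
               for w, m in zip(weights, modules))


# Example: two contexts, e.g. the arm moving freely vs. holding an object.
mods = [Module(gain=1.0), Module(gain=0.5)]
state, command = 0.0, 1.0
observed_next = 0.5   # the observed dynamics match the gain=0.5 module
w = responsibilities(mods, state, command, observed_next)
cmd = blended_command(mods, w, state, desired_state=1.0)
```

The module whose prediction best matches the observation receives the larger weight, so its inverse model dominates the blended command; in a learning version, the same weight would also scale each module's parameter update.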
Similar Papers
Learning Distinctions and Rules in a Continuous World Through Active Exploration
We present a method that allows an agent through active exploration to autonomously build a useful representation of its environment. The agent builds the representation by iteratively learning distinctions and predictive rules using those distinctions. We build on earlier work in which we showed that by motor babbling an agent could learn a representation and predictive rules that by inspectio...
Active learning of inverse models with intrinsically motivated goal exploration in robots
We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distributio...
Learning how to reach various goals by autonomous interaction with the environment: unification and comparison of exploration strategies
In the field of developmental robotics, we are particularly interested in the exploration strategies which can drive an agent to learn how to reach a wide variety of goals. In this paper, we unify and compare such strategies, recently shown to be efficient to learn complex non-linear redundant sensorimotor mappings. They combine two main principles. The first one concerns the space in which the...
Learning a Set of Interrelated Tasks by Using a Succession of Motor Policies for a Socially Guided Intrinsically Motivated Learner
We propose an active learning algorithmic architecture, capable of organizing its learning process in order to achieve a field of complex tasks by learning sequences of primitive motor policies: Socially Guided Intrinsic Motivation with Procedure Babbling (SGIM-PB). The learner can generalize over its experience to continuously learn new outcomes, by choosing actively what and how to learn gui...
Publication date: 2008